
    Towards Automated Performance Analysis of Programs by Runtime Verification

    This thesis makes a contribution to the field of Runtime Verification, a lightweight formal method for the analysis of computational systems. The contribution is made in multiple parts. First, a new language is introduced for the specification of properties at the source code level of programs. These properties typically concern program performance. Second, automatic monitoring and instrumentation techniques are introduced for the specification language. Third, an approach for explaining violations of these properties by program runs is introduced. Finally, the resulting body of theoretical work is implemented in an extensive ecosystem of tools for program analysis. This ecosystem is described in detail, along with its application to a real-world system at CERN. The work presented in this thesis diverges from past work in the Runtime Verification community. Instead of focusing on maximising the expressiveness of the specification formalism and solving the resulting monitoring and instrumentation problems, it focuses on introducing a language in which properties that often need to be checked over real-world programs can easily be expressed. In the direction of instrumentation, the source code level of abstraction of our specification language allows an approach to instrumentation that diverges from much previous work. Many previous approaches have treated instrumentation as a problem separate from specification, usually providing a language in which one can describe how instrumentation should be performed. With our specification language, instrumentation can be performed automatically with respect to a specification. Further, an area that has received little attention in the Runtime Verification community is the analysis of verdicts resulting from monitoring programs with respect to specifications. The contributions to this area described in this thesis take the form of tools in the ecosystem. These tools enable detailed exploration of monitoring information, and mark a step towards the automated generation of explanations of verdicts. Following the description of the extensive set of tools, this thesis concludes with an in-depth discussion of their application to perform significant analyses of software used at CERN. Ultimately, the work described, including the theoretical foundations and implementations, forms the beginnings of a program analysis project whose aim, through continued development at CERN, is to enable detailed analysis of the performance of programs by software engineers with minimal effort.
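    As an illustration of the kind of specification-driven instrumentation the thesis describes, the following minimal Python sketch records exactly the events (function call timestamps and durations) that a performance property over calls would need. The decorator, the event tuple, and the recorded_events list are illustrative inventions for this summary, not the thesis's actual machinery.

        import functools
        import time

        recorded_events = []

        def monitor_calls(func):
            """Instrument func so each call emits a (name, start, duration) event."""
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                start = time.perf_counter()
                result = func(*args, **kwargs)
                recorded_events.append((func.__name__, start, time.perf_counter() - start))
                return result
            return wrapper

        @monitor_calls
        def f():
            time.sleep(0.01)

        f()
        print(recorded_events)  # the event stream a monitor would check against a specification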

    Finding inspiration in Invitation to a Beheading: a thesis on the creation and development of a one-person play

    Writing and performing a one-person play was selected as the basis for a thesis project in the spring semester of 2010, to be presented to the Graduate Faculty of the Louisiana State University and Agricultural and Mechanical College in partial fulfillment of the requirements for the degree of Master of Fine Arts in the Department of Theatre. The thesis includes an introduction, a review of the literature that inspired the original production, the full text of both scripts that were used, a chapter on the process of discovering a new play, a section about future plans for the play, and a conclusion. The purpose of this thesis is to explore the way that an actor may go about creating, performing, and revising a one-person play.

    Systematic Evaluation of Deep Learning Models for Failure Prediction

    With the increasing complexity and scope of software systems, their dependability is crucial. The analysis of log data recorded during system execution can enable engineers to automatically predict failures at run time. Several Machine Learning (ML) techniques, including traditional ML and Deep Learning (DL), have been proposed to automate such tasks. However, current empirical studies are limited in terms of covering all main DL types -- Recurrent Neural Network (RNN), Convolutional Neural Network (CNN), and Transformer -- as well as examining them on a wide range of diverse datasets. In this paper, we aim to address these issues by systematically investigating the combination of log data embedding strategies and DL types for failure prediction. To that end, we propose a modular architecture to accommodate various configurations of embedding strategies and DL-based encoders. To further investigate how dataset characteristics such as dataset size and failure percentage affect model accuracy, we synthesised 360 datasets, with varying characteristics, for three distinct system behavioural models, based on a systematic and automated generation approach. Using the F1 score metric, our results show that the best overall performing configuration is a CNN-based encoder with Logkey2vec. Additionally, we provide specific dataset conditions, namely a dataset size >350 or a failure percentage >7.5%, under which this configuration demonstrates high accuracy for failure prediction.
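    As a concrete illustration of the best-performing configuration reported above, the following is a minimal PyTorch sketch of a CNN-based encoder over log-key embeddings. The names (LogCNN, vocab_size) and all hyperparameters are assumptions made for this sketch, not the paper's actual implementation; in practice the embedding layer would be initialised from trained Logkey2vec vectors rather than learned from scratch.

        import torch
        import torch.nn as nn

        class LogCNN(nn.Module):
            """CNN-based failure predictor over sequences of log-key ids (a sketch)."""
            def __init__(self, vocab_size, embed_dim=64, num_filters=128):
                super().__init__()
                # Stand-in for pretrained Logkey2vec vectors; weights would
                # normally be loaded from the trained embedding.
                self.embed = nn.Embedding(vocab_size, embed_dim)
                # Parallel 1-D convolutions with different kernel widths.
                self.convs = nn.ModuleList(
                    [nn.Conv1d(embed_dim, num_filters, kernel_size=k) for k in (2, 3, 4)]
                )
                self.classifier = nn.Linear(3 * num_filters, 1)  # failure / no-failure logit

            def forward(self, log_keys):
                # log_keys: (batch, seq_len) integer log-key ids
                x = self.embed(log_keys).transpose(1, 2)  # (batch, embed_dim, seq_len)
                pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
                return self.classifier(torch.cat(pooled, dim=1))

        model = LogCNN(vocab_size=500)
        logits = model(torch.randint(0, 500, (8, 100)))  # 8 sequences of 100 log keys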

    Kraus decomposition for chaotic environments including time-dependent subsystem Hamiltonians

    We derive an exact and explicit Kraus decomposition for the reduced density of a quantum system simultaneously interacting with time-dependent external fields and a chaotic environment of thermodynamic dimension. We test the accuracy of the Kraus decomposition against exact numerical results for a CNOT gate performed on two qubits of an (N+2)-qubit statically flawed isolated quantum computer. Here the N idle qubits comprise the finite environment. We obtain very good agreement even for small N.
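    For context, a Kraus decomposition expresses the reduced density operator of the system entirely in terms of operators acting on the system's own Hilbert space. In the standard textbook form (the paper makes this exact and explicit for its specific setting):

        \rho_S(t) = \sum_k K_k(t) \, \rho_S(0) \, K_k^\dagger(t),
        \qquad \sum_k K_k^\dagger(t) \, K_k(t) = \mathbb{1}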

    Genetic screening reveals phospholipid metabolism as a key regulator of the biosynthesis of the redox-active lipid coenzyme Q.

    Mitochondrial energy production and function rely on optimal concentrations of the essential redox-active lipid, coenzyme Q (CoQ). CoQ deficiency results in mitochondrial dysfunction associated with increased mitochondrial oxidative stress and a range of pathologies. What drives CoQ deficiency in many of these pathologies is unknown, and there is currently no effective therapeutic strategy to overcome CoQ deficiency in humans. To date, large-scale studies aimed at systematically interrogating endogenous systems that control CoQ biosynthesis, and their potential utility to treat disease, have not been carried out. Therefore, we developed a quantitative high-throughput method to determine CoQ concentrations in yeast cells. Applying this method to the Yeast Deletion Collection in a genome-wide screen, we discovered 30 genes not previously known to regulate cellular concentrations of CoQ. In combination with untargeted lipidomics and metabolomics, phosphatidylethanolamine N-methyltransferase (PEMT) deficiency was confirmed as a positive regulator of CoQ synthesis, the first identified to date. Mechanistically, PEMT deficiency alters mitochondrial concentrations of one-carbon metabolites, characterized by an increase in the S-adenosylmethionine to S-adenosylhomocysteine (SAM-to-SAH) ratio that reflects mitochondrial methylation capacity, drives CoQ synthesis, and is associated with a decrease in mitochondrial oxidative stress. The newly described regulatory pathway appears evolutionarily conserved: ablation of PEMT using antisense oligonucleotides increases mitochondrial CoQ in mouse-derived adipocytes, which translates to improved glucose utilization by these cells, and protects mice from high-fat diet-induced insulin resistance. Our studies reveal a previously unrecognized relationship between two spatially distinct lipid pathways, with potential implications for the treatment of CoQ deficiencies, mitochondrial oxidative stress/dysfunction, and associated diseases.

    Existing Infection Facilitates Establishment and Density of Malaria Parasites in Their Mosquito Vector

    Very little is known about how vector-borne pathogens interact within their vector and how this impacts transmission. Here we show that mosquitoes can accumulate mixed-strain malaria infections after feeding on multiple hosts. We found that parasites have a greater chance of establishing, and reach higher densities, if another strain is already present in a mosquito. Mixed infections contained more parasites, but these larger populations did not have a detectable impact on vector survival. Together, these results suggest that mosquitoes taking multiple infective bites may disproportionately contribute to malaria transmission. This will increase rates of mixed infections in vertebrate hosts, with implications for the evolution of parasite virulence and the spread of drug-resistant strains. Moreover, control measures that reduce parasite prevalence in vertebrate hosts will reduce the likelihood of mosquitoes taking multiple infective feeds, and thus disproportionately reduce transmission. More generally, our study shows that the types of strain interactions detected in vertebrate hosts cannot necessarily be extrapolated to vectors.

    VyPR: a framework for automated performance analysis of Python programs

    VyPR is a framework being developed with the aim of automating as much as possible the performance analysis of Python programs. To achieve this, it uses an analysis-by-specification approach; developers specify the performance requirements of their programs (without any modification of the source code) and such requirements are checked at runtime. VyPR then provides analysis tools which allow developers either to determine which parts of their code may contribute to performance drops, or to rule out their own code (in the case of code communicating over a network). During its short lifetime, VyPR has been used to find performance drops in the next version of the CMS Experiment’s upload service for non-event data (paper at TACAS 2019). The next year will see extensive work on VyPR’s analysis tools, with plans for the development of a powerful Python shell-based analysis library and extensions of the current web-based tool. Alongside this work, we are actively looking for new environments in which VyPR could be used. This talk will begin by introducing the analysis-by-specification approach to performance analysis and covering the process for using VyPR. It will then summarise how VyPR has already been used on the CMS Experiment, and conclude with suggestions on how it could be used on other software to construct explanations of performance drops.

    About the speaker: Joshua Dawes completed a Bachelor’s degree in Computer Science and Mathematics at the University of Manchester in 2017. During his first degree, he spent a year at CERN on the Technical Student programme working on the CMS Experiment, to which he returned in 2017 as a Doctoral Student. His PhD involves developing the rigorous theoretical foundations and subsequent implementations for new program analysis tools. Such tools cater for the CMS Experiment’s evolving needs, currently focusing on Python.
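    To give a flavour of the analysis-by-specification approach described in this talk abstract, the following is a sketch of a performance requirement written in the style of VyPR's published examples. The construct names (Forall, changes, calls) and the exact constraint syntax are paraphrased from that style and are not guaranteed to match the framework's current API.

        # Sketch only: assumes VyPR's specification constructs are in scope;
        # the construct and method names are assumptions based on published
        # examples, not the framework's documented API.
        # Property: whenever the variable `payload` changes, the next call to
        # `upload` should take less than 0.5 seconds.
        Forall(q = changes('payload')).\
            Check(lambda q: q('payload')._next(calls('upload')).duration() < 0.5)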

    A Python object-oriented framework for the CMS alignment and calibration data

    The Alignment, Calibrations and Databases group at the CMS Experiment delivers Alignment and Calibration Conditions Data to a large set of workflows which process recorded event data and produce simulated events. The current infrastructure for releasing and consuming Conditions Data was designed during the two years of the first LHC long shutdown to respond to use cases from the preceding data-taking period. During the second run of the LHC, new use cases were defined. For the consumption of Conditions Metadata, no common interface existed for detector experts to use in Python-based custom scripts, resulting in many different querying and transaction management patterns. A new framework has been built to address such use cases: a simple object-oriented tool that detector experts can use to read and write Conditions Metadata in Oracle and SQLite databases, and that provides a homogeneous method of querying across all services. The tool provides mechanisms for segmenting large sets of conditions while releasing them to the production database, allows for uniform error reporting from the server side to the client side, and optimizes the data transfer to the server. The architecture of the new service has been developed by exploiting many of the features made available by the metadata consumption framework to implement the required improvements. This paper presents the details of the design and implementation of the new metadata consumption and data upload framework, as well as analyses of the upload service’s performance as the server-side state varies.
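    To illustrate the kind of uniform, object-oriented querying described above, here is a minimal Python sketch. The class, method names, and table schema are hypothetical, invented for this example; they are not the CMS tool's actual interface. Only the SQLite backend is sketched, but an Oracle backend would present the same interface.

        import sqlite3
        from dataclasses import dataclass

        @dataclass
        class Tag:
            name: str
            time_type: str

        class ConditionsSession:
            """Uniform query layer over a conditions metadata database (hypothetical)."""
            def __init__(self, connection_string):
                self.conn = sqlite3.connect(connection_string)

            def tags(self, name_pattern="%"):
                # One querying pattern for every backend, instead of
                # hand-written SQL scattered across custom scripts.
                rows = self.conn.execute(
                    "SELECT name, time_type FROM tag WHERE name LIKE ?",
                    (name_pattern,),
                )
                return [Tag(*row) for row in rows]

        session = ConditionsSession("conditions.db")  # hypothetical local snapshot
        for tag in session.tags("BeamSpot%"):
            print(tag.name, tag.time_type)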

    Specifying Properties over Inter-Procedural, Source Code Level Behaviour of Programs

    The problem of verifying a program at runtime with respect to some formal specification has led to the development of a rich collection of specification languages. These languages often have a high level of abstraction and provide sophisticated modal operators, giving a high level of expressiveness. In particular, this makes it possible to express properties concerning the source code level behaviour of programs. However, for many languages, the correspondence between events generated at the source code level and parts of the specification in question would have to be carefully defined. To enable properties over source code level behaviour to be expressed in a temporal logic without the need for this correspondence, previous work introduced Control-Flow Temporal Logic (CFTL), a specification language with a low level of abstraction with respect to the source code of programs. However, this work focused solely on the intra-procedural setting. In this paper, we address this limitation by introducing Inter-procedural CFTL, a language for expressing source code level, inter-procedural properties of program runs. We evaluate the new language, iCFTL, via application to a real-world case study.
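    An illustrative CFTL-style property, in paraphrased rather than verbatim syntax: for every change q of a variable x, the next call to a function f after q should complete within one second. In quantifier form:

        \forall q \in \mathrm{changes}(x) :\;
            \mathrm{duration}\bigl(\mathrm{next}(q, \mathrm{calls}(f))\bigr) \in [0, 1]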

    Specifying Source Code and Signal-based Behaviour of CPS Components

    Specifying properties over the behaviour of components of Cyber-Physical Systems usually focuses on the behaviour of signals, i.e., the behaviour of the physical part of the system, leaving the behaviour of the cyber components implicit. There have been some attempts to provide specification languages that enable more explicit reference to the behaviour of cyber components, but it remains awkward to directly express the behaviour of both cyber and physical components in the same specification, using one formalism. In this paper, we introduce a new specification language, Source Code and Signal Logic (SCSL), that 1) provides syntax specific to both signals and events originating in source code; and 2) does not require source code events to be abstracted into signals. We introduce SCSL by giving its syntax and semantics, along with examples. We then provide a comparison between SCSL and existing specification languages, using an example property, to show the benefit of using SCSL to capture certain types of properties.
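    As an illustration of the kind of mixed property SCSL targets, in paraphrased rather than SCSL's verbatim syntax: whenever the source code variable gain is assigned, the signal temperature must stay below 80 for the next 5 time units:

        \forall t \in \mathrm{changes}(\mathtt{gain}) :\;
            \forall t' \in [t, t + 5] :\; \mathit{temperature}(t') < 80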